METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL
Patent abstract:
Method and apparatus for processing a video signal. A method for decoding a video signal in accordance with the present invention may comprise: obtaining a first motion vector of the current block; obtaining a second motion vector of the current block; generating a first prediction sample for the current block; generating a second prediction sample for the current block; determining a first weight and a second weight based on index information parsed from a bit stream; obtaining a third prediction sample of the current block; and obtaining a reconstruction sample by adding the third prediction sample and a residual sample, where the index information specifies one of a plurality of candidate weight prediction parameters, and where the maximum bit length of the index information is determined based on the temporal directions of the first reference image and the second reference image. (Machine-translation by Google Translate, not legally binding)
Publication number: ES2802817A2
Application number: ES202031254
Filing date: 2017-06-30
Publication date: 2021-01-21
Inventor: Bae Keun Lee
Applicant: KT Corp
IPC main class:
Patent description:
[0004] Technical field

[0006] The present invention relates to a method and apparatus for processing a video signal.

[0008] Background art

[0010] Today, demand for high-resolution, high-quality images such as high-definition (HD) and ultra-high-definition (UHD) images has increased in various fields of application. However, image data of higher resolution and quality has an increasing amount of data compared to conventional image data. Therefore, when image data is transmitted using a medium such as conventional wired and wireless broadband networks, or when image data is stored using a conventional storage medium, the cost of transmission and storage increases. To solve these problems that occur with an increase in the resolution and quality of image data, high-efficiency image encoding/decoding techniques can be used.

[0012] Image compression technology includes various techniques, including: an inter-prediction technique of predicting a pixel value included in a current snapshot from an earlier or later snapshot of the current snapshot; an intra-prediction technique of predicting a pixel value included in a current snapshot using pixel information in the current snapshot; an entropy coding technique of assigning a short code to a value with a high occurrence frequency and assigning a long code to a value with a low occurrence frequency; etc. Image data can be effectively compressed using such image compression technology and can be transmitted or stored.

[0014] Meanwhile, with the demands for high-resolution imaging, the demands for stereographic imaging content, which is a new imaging service, have also increased. A video compression technique for effectively providing stereographic image content with high resolution and ultra-high resolution is being studied.

[0016] Disclosure

[0018] Technical problem

[0020] An object of the present invention is to provide a method and apparatus for efficiently predicting an encoding/decoding target block when encoding/decoding a video signal.

[0022] An object of the present invention is to provide a prediction method combining a plurality of prediction modes, and an apparatus using the same, when encoding/decoding a video signal.

[0024] An object of the present invention is to provide a method and apparatus for performing prediction in units of a sub-block when encoding/decoding a video signal.

[0026] The technical objects to be achieved by the present invention are not limited to the above-mentioned technical problems, and other technical problems that are not mentioned will be apparent to those skilled in the art from the following description.

[0028] Technical solution

[0030] A method and apparatus for decoding a video signal according to the present invention can generate a first prediction block for a current block based on a first prediction mode, generate a second prediction block for the current block based on a second prediction mode, and generate a final prediction block of the current block based on the first prediction block and the second prediction block.

[0032] A method and apparatus for encoding a video signal according to the present invention can generate a first prediction block for a current block based on a first prediction mode, generate a second prediction block for the current block based on a second prediction mode, and generate a final prediction block of the current block based on the first prediction block and the second prediction block.
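As described in the "Technical solution" paragraphs above and elaborated later in this description, the final prediction block can be obtained by a weighted sum of the first prediction block and the second prediction block, with the weights selected by signalled information. The following sketch is only illustrative: the candidate weight values, function names and 8-bit sample assumption are not taken from this disclosure.

```python
import numpy as np

# Hypothetical candidate weight pairs (w1, w2); the actual candidate set and its
# signalling are defined by the codec, not by this sketch.
CANDIDATE_WEIGHTS = [(1/2, 1/2), (1/4, 3/4), (3/4, 1/4), (3/8, 5/8), (5/8, 3/8)]

def combine_prediction_blocks(pred1: np.ndarray, pred2: np.ndarray,
                              weight_index: int) -> np.ndarray:
    """Blend two same-sized prediction blocks with weights chosen by an index."""
    w1, w2 = CANDIDATE_WEIGHTS[weight_index]
    blended = w1 * pred1.astype(np.float64) + w2 * pred2.astype(np.float64)
    # Clip to the valid sample range (8-bit samples assumed here).
    return np.clip(np.round(blended), 0, 255).astype(np.uint8)

# Example: blend a block obtained by a first prediction mode (e.g. intra-prediction)
# with a block obtained by a second prediction mode (e.g. inter-prediction).
first_pred = np.full((8, 8), 120, dtype=np.uint8)
second_pred = np.full((8, 8), 160, dtype=np.uint8)
final_pred = combine_prediction_blocks(first_pred, second_pred, weight_index=1)
print(final_pred[0, 0])   # 0.25 * 120 + 0.75 * 160 = 150
```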
[0034] In the method and apparatus for encoding / decoding a video signal according to the present invention, the first prediction mode may be different from the second prediction mode. [0036] In the method and apparatus for encoding / decoding a video signal according to the present invention, the first prediction mode may be an intra-prediction mode and the second prediction mode may be an inter-prediction mode. [0038] In the method and apparatus for encoding / decoding a video signal according to the present invention, the first prediction mode may be an inter-prediction mode different from the second prediction mode, and the inter-prediction mode may comprise at least one of a jump mode, a blend mode, an AMVP (Advanced Motion Vector Prediction) mode, or a current snapshot reference mode. [0040] In the method and apparatus for encoding / decoding a video signal according to the present invention, the final prediction block can be obtained based on a weighted sum operation between the first prediction block and the second prediction block. [0042] In the method and apparatus for encoding / decoding a video signal according to the present invention, weights applied to the first prediction block and the second prediction block can be determined based on a weighted prediction parameter of the current block. [0044] In the method and apparatus for encoding / decoding a video signal according to the present invention, it can be determined whether to use a prediction method that combines the first prediction mode and the second prediction mode based on a shape or a size of the current block. [0046] In the method and apparatus for encoding / decoding a video signal according to the present invention, the current block can comprise a first sub-block and a second sub-block, a final prediction block of the first sub-block can be generated based on the first sub-block of prediction and a prediction block end of the second sub-block can be generated based on the first prediction block and the second prediction block. [0048] The features briefly summarized above for the present invention are only illustrative aspects of the detailed description of the invention that follows, but do not limit the scope of the invention. [0050] Advantageous effects [0052] According to the present invention, a target encoding / decoding block can be predicted efficiently. [0054] In accordance with the present invention, an encode / decode target block can be predicted by combining a plurality of prediction modes. [0056] In accordance with the present invention, a prediction method can be determined in units of a sub-block and a prediction can be made in units of a sub-block. [0058] The effects obtainable by the present invention are not limited to the above-mentioned effects and other effects not mentioned can be clearly understood by those skilled in the art from the description below. [0060] Description of the drawings [0062] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention. [0064] Figure 2 is a block diagram illustrating a device for decoding a video in accordance with an embodiment of the present invention. [0066] Figure 3 is a diagram illustrating an example of hierarchical partitioning of a coding block based on a tree structure according to an embodiment of the present invention. 
[0068] Figure 4 is a diagram illustrating a partition type in which binary tree based partitioning is allowed according to an embodiment of the present invention. [0070] Figure 5 is a diagram illustrating an example where only one binary tree-based partition of a predetermined type is allowed in accordance with an embodiment of the present invention. [0072] Fig. 6 is a diagram for explaining an example in which information related to the allowed number of binary tree partition is encoded / decoded, according to an embodiment to which the present invention is applied. [0074] Figure 7 is a diagram illustrating a partition mode applicable to a coding block in accordance with an embodiment of the present invention. [0076] Figure 8 is a flow chart illustrating processes for obtaining a residual sample in accordance with an embodiment to which the present invention is applied. [0078] Figure 9 is a flow chart illustrating an interprediction method according to an embodiment to which the present invention is applied. [0080] Figure 10 is a diagram illustrating processes of obtaining motion information from a current block when a merge mode is applied to the current block. [0082] Figure 11 is a diagram illustrating processes of obtaining motion information from a current block when an AMVP mode is applied to the current block. [0084] Figure 12 is a flow chart of a bidirectional weighted prediction method, in accordance with one embodiment of the present invention. [0086] Figure 13 is a diagram for explaining a bidirectional weighted prediction principle. [0088] Figure 14 is a diagram illustrating a scan order between neighboring blocks. [0090] Figure 15 is a flow chart illustrating a combined prediction method in accordance with the present invention. [0091] Figures 16 and 17 are diagrams illustrating an example of generating a prediction block of a current block based on a weighted sum of a plurality of prediction blocks obtained by different prediction modes. [0093] Figure 18 is a diagram illustrating an example where prediction is performed in units of a sub-block. [0095] Figure 19 is a flow chart of a lighting compensation prediction method in accordance with the present invention. [0097] Figure 20 is a flow chart of a bidirectional weighted prediction method based on illumination compensation. [0099] FIG. 21 is a diagram illustrating an exemplary two-way weighted prediction embodiment using a prediction block to which illumination compensation is applied. [0101] Mode for invention [0103] A variety of modifications can be made to the present invention and there are various embodiments of the present invention, examples of which will now be provided with reference to the drawings and will be described in detail. However, the present invention is not limited thereto and the illustrative embodiments may be construed to include all modifications, equivalents, or substitutes in a technical concept and technical scope of the present invention. Like reference numbers refer to the like element described in the drawings. [0105] The terms used in the specification, 'first', 'second', etc., can be used to describe various components, but the components are not to be construed as being limited to the terms. The terms are used only to differentiate one component from other components. For example, the 'first' component can be named the 'second' component without departing from the scope of the present invention and the 'second' component can also be similarly named the 'first' component. 
The term 'and / or' includes a combination of a plurality of elements or any one of a plurality of terms. [0106] It will be understood that when an element is simply referred to as 'connecting to' or 'connecting to' another element without 'connecting directly to' or 'connecting directly to' another element in the present description, it may be 'directly connected to' or 'directly coupled to' another element or be connected to or coupled to another element, which has the other element intermediate between them. In contrast, it should be understood that when an element is referred to as being "directly coupled" or "directly connected" to another element, there are no intermediate elements present. [0108] The terms used in the present specification are used simply to describe particular embodiments, and are not intended to limit the present invention. An expression used in the singular encompasses the expression of the plural, unless it has a clearly different meaning in context. In the present specification, it is to be understood that terms such as "which includes", "which has", etc., are intended to indicate the existence of the characteristics, numbers, stages, actions, elements, parts or combinations thereof. disclosed in the specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, elements, parts, or combinations thereof may exist or may be added. [0110] Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Hereinafter, the same constituent elements in the drawings are indicated by the same reference numerals and a repeated description of the same elements will be omitted. [0112] Figure 1 is a block diagram illustrating a device for encoding a video in accordance with an embodiment of the present invention. [0114] Referring to Figure 1, the device 100 for encoding a video may include: a snapshot partition module 110, prediction modules 120 and 125, a transform module 130, a quantization module 135, a reorganization module 160, an entropy encoding module 165, an inverse quantization module 140, an inverse transform module 145, a filter module 150, and a memory 155. [0116] The constitutional parts shown in Figure 1 are shown independently to represent characteristic functions different from each other in the device to encode a video. Therefore, it does not mean that each constitutional part is constituted of a separate constitutional hardware or software unit. In other words, each constitutional part includes each of the constitutional parts listed for convenience. Therefore, at least two constitutional parts of each constitutional part can be combined to form a constitutional part or a constitutional part can be divided into a plurality of constitutional parts to perform each function. The embodiment where each constitutional part is combined and the embodiment where one constitutional part is divided are also included in the scope of the present invention, if they do not depart from the essence of the present invention. [0118] Also, some of the constituents may not be indispensable constituents that perform essential functions of the present invention but rather be selective constituents that only enhance the performance thereof. The present invention can be implemented by including only the constitutional parts indispensable to implement the essence of the present invention except the constituents used in improving performance. 
The structure including only the indispensable constituents, except the selective constituents used merely to enhance performance, is also within the scope of the present invention.

[0120] The snapshot partition module 110 can divide an input snapshot into one or more processing units. At this point, the processing unit can be a prediction unit (PU), a transform unit (TU) or a coding unit (CU). The snapshot partition module 110 can split a snapshot into combinations of multiple coding units, prediction units and transform units, and can encode a snapshot by selecting a combination of coding units, prediction units and transform units according to a predetermined criterion (for example, a cost function).

[0122] For example, a snapshot can be divided into multiple coding units. A recursive tree structure, such as a quad tree structure, can be used to divide a snapshot into coding units. A coding unit that is divided into other coding units, with a snapshot or a largest coding unit as a root, can be divided with child nodes corresponding to the number of divided coding units. A coding unit that is no longer divided according to a predetermined constraint serves as a leaf node. That is, when only square partitioning is possible for one coding unit, one coding unit can be divided into four other coding units at most.

[0124] Hereinafter, in the embodiments of the present invention, the coding unit may mean a unit that performs encoding or a unit that performs decoding.

[0126] A prediction unit can be one of the partitions divided into a square or rectangular shape of the same size within a single coding unit, or a prediction unit can be one of the partitions divided so as to have a different shape/size within a single coding unit.

[0128] When a prediction unit subjected to intra-prediction is generated based on a coding unit and the coding unit is not the smallest coding unit, intra-prediction can be performed without dividing the coding unit into multiple NxN prediction units.

[0130] Prediction modules 120 and 125 may include an inter-prediction module 120 that performs inter-prediction and an intra-prediction module 125 that performs intra-prediction. It can be determined whether to perform inter-prediction or intra-prediction for the prediction unit, and detailed information (e.g., an intra-prediction mode, a motion vector, a reference snapshot, etc.) can be determined according to each prediction method. At this point, the processing unit subjected to prediction may be different from the processing unit for which the prediction method and detailed content are determined. For example, the prediction method, prediction mode, etc., can be determined by the prediction unit, and prediction can be performed by the transform unit. A residual value (residual block) between the generated prediction block and an original block can be input to the transform module 130. Also, the prediction mode information, the motion vector information, etc., used for prediction can be encoded with the residual value by the entropy encoding module 165 and can be transmitted to a device for decoding a video. When a particular encoding mode is used, it is also possible to transmit to the device for decoding a video by encoding the original block as it is, without generating the prediction block through the prediction modules 120 and 125.
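The recursive quad tree splitting of a snapshot into coding units described in paragraph [0122] can be sketched as a simple recursion. The stopping rule below (a caller-supplied predicate plus a minimum size) merely stands in for the encoder's cost-function-based decision and is an assumption of this sketch, not part of the disclosure.

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively divide a square block into four equal square sub-blocks.

    `should_split(x, y, size)` stands in for the encoder's predetermined
    criterion (for example, a cost function). Returns (x, y, size) tuples
    describing the leaf coding blocks.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]                 # leaf node: no further split
    half = size // 2
    leaves = []
    for dy in (0, half):                      # each split yields exactly four children
        for dx in (0, half):
            leaves += quadtree_partition(x + dx, y + dy, half, min_size, should_split)
    return leaves

# Example: a 64x64 coding tree unit whose top-left quadrant is split one level deeper.
leaves = quadtree_partition(0, 0, 64, min_size=8,
                            should_split=lambda x, y, s: x == 0 and y == 0 and s > 16)
print(leaves)   # four 16x16 leaves in the top-left quadrant plus three 32x32 leaves
```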
[0135] The inter-prediction module 120 can predict the prediction unit based on information from at least one of a previous snapshot or a subsequent snapshot of the current snapshot, or, in some cases, can predict the prediction unit based on information from some encoded regions in the current snapshot. The inter-prediction module 120 may include a reference snapshot interpolation module, a motion prediction module, and a motion compensation module.

[0137] The reference snapshot interpolation module can receive reference snapshot information from the memory 155 and can generate pixel information of an integer pixel or less than an integer pixel from the reference snapshot. In the case of luminance pixels, an 8-tap DCT-based interpolation filter having different filter coefficients can be used to generate pixel information of an integer pixel or less than an integer pixel in units of 1/4 pixel. In the case of chrominance signals, a 4-tap DCT-based interpolation filter having different filter coefficients can be used to generate pixel information of an integer pixel or less than an integer pixel in units of 1/8 pixel.

[0139] The motion prediction module can perform motion prediction based on the reference snapshot interpolated by the reference snapshot interpolation module. Various methods can be used to calculate a motion vector, such as a full search-based block matching algorithm (FBMA), a three-step search (TSS), a new three-step search algorithm (NTS), etc. The motion vector can have a motion vector value in units of 1/2 pixel or 1/4 pixel based on an interpolated pixel. The motion prediction module can predict a current prediction unit by changing the motion prediction method. Various methods can be used as motion prediction methods, such as a jump method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, an intra-block copy method, etc.

[0141] The intra-prediction module 125 can generate a prediction unit based on reference pixel information neighboring a current block, which is pixel information in the current snapshot. When the neighboring block of the current prediction unit is an inter-predicted block and therefore a reference pixel is an inter-predicted pixel, the reference pixel included in the inter-predicted block can be replaced by reference pixel information of a neighboring block subjected to intra-prediction. That is, when a reference pixel is not available, at least one reference pixel of the available reference pixels may be used instead of the unavailable reference pixel information.

[0143] Prediction modes in intra-prediction may include a directional prediction mode that uses reference pixel information depending on a prediction direction and a non-directional prediction mode that does not use directional information in making the prediction. A mode for predicting luminance information may be different from a mode for predicting chrominance information, and in order to predict chrominance information, intra-prediction mode information used to predict luminance information or predicted luminance signal information may be used.

[0145] In performing intra-prediction, when the size of the prediction unit is the same as the size of the transform unit, intra-prediction can be performed on the prediction unit based on pixels located on the left, upper left and top of the prediction unit.
However, in performing intra-prediction, when the size of the prediction unit is different from the size of the transform unit, intra-prediction can be performed using a reference pixel based on the transform unit. Also, intra-prediction using an NxN partition can be used only for the smallest coding unit.

[0147] In the intra-prediction method, a prediction block can be generated after applying an AIS (Adaptive Intra Smoothing) filter to a reference pixel depending on the prediction mode. The type of the AIS filter applied to the reference pixel can vary. To perform the intra-prediction method, an intra-prediction mode of the current prediction unit can be predicted from the intra-prediction mode of the prediction unit neighboring the current prediction unit. In predicting the prediction mode of the current prediction unit using mode information predicted from the neighboring prediction unit, when the intra-prediction mode of the current prediction unit is the same as the intra-prediction mode of the neighboring prediction unit, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are equal to each other can be transmitted using predetermined flag information. When the prediction mode of the current prediction unit is different from the prediction mode of the neighboring prediction unit, entropy coding can be performed to encode the prediction mode information of the current block.

[0149] Also, a residual block may be generated that includes information on a residual value that is a difference between the prediction unit subjected to prediction and the original block of the prediction unit, based on the prediction units generated by the prediction modules 120 and 125. The generated residual block can be input to the transform module 130.

[0151] The transform module 130 can transform the residual block, including the information on the residual value between the original block and the prediction unit generated by the prediction modules 120 and 125, using a transform method such as discrete cosine transform (DCT), discrete sine transform (DST) or KLT. Whether to apply DCT, DST or KLT to transform the residual block can be determined based on intra-prediction mode information of the prediction unit used to generate the residual block.

[0153] The quantization module 135 can quantize values transformed to a frequency domain by the transform module 130. Quantization coefficients can vary depending on the block or the importance of a snapshot. The values calculated by the quantization module 135 may be provided to the inverse quantization module 140 and the reorganization module 160.

[0155] The reorganization module 160 may reorganize coefficients of quantized residual values.

[0157] The reorganization module 160 can change a coefficient in the form of a two-dimensional block into a coefficient in the form of a one-dimensional vector through a coefficient scanning method. For example, the reorganization module 160 can scan from a DC coefficient to a coefficient in the high-frequency domain using a zigzag scanning method so as to change the coefficients into the form of a one-dimensional vector. Depending on the size of the transform unit and the intra-prediction mode, vertical direction scanning, where coefficients in the form of two-dimensional blocks are scanned in the column direction, or horizontal direction scanning, where coefficients in the form of two-dimensional blocks are scanned in the row direction, can be used instead of zigzag scanning.
That is, which scanning method is used among zigzag scanning, vertical direction scanning and horizontal direction scanning can be determined depending on the size of the transform unit and the intra-prediction mode.

[0162] The entropy coding module 165 can perform entropy coding based on the values calculated by the reorganization module 160. Entropy coding can use various coding methods, e.g., exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC).

[0164] The entropy coding module 165 can encode a variety of information, such as residual value coefficient information and block type information of the coding unit, prediction mode information, partition unit information, prediction unit information, transform unit information, motion vector information, reference frame information, block interpolation information, filtering information, etc., from the reorganization module 160 and the prediction modules 120 and 125.

[0166] The entropy coding module 165 may entropy-encode the coefficients of the coding unit input from the reorganization module 160.

[0168] The inverse quantization module 140 can inverse quantize the values quantized by the quantization module 135, and the inverse transform module 145 can inverse transform the values transformed by the transform module 130. The residual value generated by the inverse quantization module 140 and the inverse transform module 145 can be combined with the prediction unit predicted by a motion estimation module, a motion compensation module and the intra-prediction module of the prediction modules 120 and 125, such that a reconstructed block can be generated.

[0170] The filter module 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).

[0172] The deblocking filter can eliminate block distortion that occurs due to boundaries between blocks in the reconstructed snapshot. To determine whether to perform deblocking, the pixels included in several rows or columns in the block can be a basis for determining whether to apply the deblocking filter to the current block. When the deblocking filter is applied to the block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. Also, in applying the deblocking filter, horizontal direction filtering and vertical direction filtering can be processed in parallel.

[0174] The offset correction module can correct an offset with respect to the original snapshot in units of a pixel in the snapshot subjected to deblocking. To perform offset correction on a particular snapshot, it is possible to use a method of applying an offset taking into account edge information of each pixel, or a method of partitioning the pixels of a snapshot into a predetermined number of regions, determining a region to be subjected to the offset, and applying the offset to the determined region.

[0176] Adaptive loop filtering (ALF) can be performed based on the value obtained by comparing the filtered reconstructed snapshot and the original snapshot. The pixels included in the snapshot can be divided into predetermined groups, a filter to be applied to each of the groups can be determined, and filtering can be performed individually for each group. Information about whether to apply ALF and a luminance signal can be transmitted per coding unit (CU). The shape and filter coefficients of a filter for ALF can vary depending on each block.
Also, the filter for ALF of the same form (fixed form) can be applied regardless of the characteristics of the target block to which it is applied.

[0178] The memory 155 may store the reconstructed block or snapshot calculated through the filter module 150. The stored reconstructed block or snapshot can be provided to the prediction modules 120 and 125 when performing inter-prediction.

[0183] Figure 2 is a block diagram illustrating a device for decoding a video in accordance with an embodiment of the present invention.

[0185] Referring to Figure 2, the device 200 for decoding a video can include: an entropy decoding module 210, a reorganization module 215, an inverse quantization module 220, an inverse transform module 225, prediction modules 230 and 235, a filter module 240, and a memory 245.

[0187] When a video bit stream is input from the device for encoding a video, the input bit stream can be decoded according to a process inverse to that of the device for encoding a video.

[0189] The entropy decoding module 210 can perform entropy decoding according to a process inverse to the entropy encoding performed by the entropy encoding module of the device for encoding a video. For example, various methods corresponding to the methods performed by the device for encoding a video can be applied, such as exponential Golomb coding, context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC).

[0191] The entropy decoding module 210 can decode information about the intra-prediction and inter-prediction performed by the device for encoding a video.

[0193] The reorganization module 215 can perform reorganization on the bit stream entropy-decoded by the entropy decoding module 210, based on the reorganization method used in the device for encoding a video. The reorganization module can reconstruct and rearrange the coefficients in the form of one-dimensional vectors into coefficients in the form of two-dimensional blocks. The reorganization module 215 can receive information related to the coefficient scanning performed in the device for encoding a video and can perform reorganization by a method of inversely scanning the coefficients based on the scanning order performed in the device for encoding a video.

[0195] The inverse quantization module 220 can perform inverse quantization based on a quantization parameter received from the device for encoding a video and the reorganized coefficients of the block.

[0200] The inverse transform module 225 can perform the inverse transform, that is, inverse DCT, inverse DST and inverse KLT, which is the inverse process of the transform, that is, DCT, DST and KLT, performed by the transform module on the quantization result of the device for encoding a video. The inverse transform can be performed based on a transform unit determined by the device for encoding a video. The inverse transform module 225 of the device for decoding a video can selectively perform transform schemes (e.g., DCT, DST, and KLT) depending on multiple pieces of information, such as the prediction method, the size of the current block, the prediction direction, etc.

[0202] Prediction modules 230 and 235 may generate a prediction block based on prediction block generation information received from the entropy decoding module 210 and previously decoded block or snapshot information received from the memory 245.
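A minimal sketch of the decoder-side path just described (reorganized coefficients, inverse quantization, inverse transform, then combination with the prediction block). The flat scalar quantization step and the plain 2-D inverse DCT below are simplifying assumptions standing in for the actual quantization parameter handling and the selectable DCT/DST/KLT transforms; SciPy is assumed to be available for the inverse DCT.

```python
import numpy as np
from scipy.fftpack import idct   # assumption: SciPy provides the inverse DCT used here

def reconstruct_block(quantized_coeffs: np.ndarray, qstep: float,
                      prediction: np.ndarray) -> np.ndarray:
    """Toy reconstruction: dequantize, inverse transform, add the prediction block."""
    dequant = quantized_coeffs.astype(np.float64) * qstep        # inverse quantization
    # 2-D inverse DCT as a stand-in for the inverse transform (DCT/DST/KLT).
    residual = idct(idct(dequant, axis=0, norm="ortho"), axis=1, norm="ortho")
    # Reconstructed samples = prediction block + residual block, clipped to 8 bits.
    return np.clip(np.round(prediction + residual), 0, 255).astype(np.uint8)

# Example: an 8x8 block whose only non-zero quantized coefficient is the DC term.
coeffs = np.zeros((8, 8))
coeffs[0, 0] = 16
recon = reconstruct_block(coeffs, qstep=8.0, prediction=np.full((8, 8), 100.0))
print(recon[0, 0])   # 116: the DC coefficient adds a flat offset of 16 to the prediction
```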
[0204] As described above, as the operation of the device for encoding a video, in performing intra prediction, when the size of the prediction unit is the same as the size of the transform unit, intra prediction can be performed in the unit based on the pixels to the left, top left, and top of the prediction unit. In performing intra prediction, when the size of the prediction unit is different from the size of the transform unit, intra prediction can be performed using a reference pixel based on the transform unit. Also, intra-prediction can be used using NxN partition for only the smallest coding unit. [0206] The prediction modules 230 and 235 may include a prediction unit determination module, an inter-prediction module, and an intra-prediction module. The prediction unit determination module may receive a variety of information, such as prediction unit information, prediction mode information of an intra-prediction method, information about motion prediction of an inter-prediction method, etc. From the entropy decoding module 210, you can divide a current encoding unit into units of prediction and can determine whether inter-prediction or intra-prediction is performed in the prediction unit. Using information required in interprediction of the current prediction unit received from the device for encoding a video, the interprediction module 230 can perform interprediction in the current prediction unit based on information from at least one of a previous snapshot or a subsequent snapshot of the current snapshot that includes the current prediction unit. Alternatively, inter-prediction can be performed based on information from some previously reconstructed regions in the current snapshot that includes the current prediction unit. [0208] To perform inter-prediction, it can be determined for the coding unit which of a jump mode, a blend mode, an AMVP mode, and an inter-block copy mode is used as the motion prediction method of the included prediction unit. in the encoding unit. [0210] The intra prediction module 235 can generate a prediction block based on pixel information in the current snapshot. When the prediction unit is a prediction unit subjected to intra-prediction, intra-prediction can be performed based on intra-prediction mode information of the prediction unit received from the device for encoding a video. Intra-prediction module 235 may include an adaptive intra-smoothing (AIS) filter, a reference pixel interpolation module, and a DC filter. The AIS filter performs filtering on the reference pixel of the current block and it can be determined whether to apply the filter depending on the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixel of the current block using the prediction mode of the prediction unit and the AIS filter information received from the device to encode a video. When the prediction mode of the current block is a mode where AIS filtering is not performed, the AIS filter cannot be applied. [0212] When the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on the pixel value obtained by interpolating the reference pixel, the reference pixel interpolation module can interpolate the reference pixel to generate the reference pixel of a whole pixel or less than a whole pixel. 
When the prediction mode of the current prediction unit is a prediction mode in which a prediction block is generated without interpolating the reference pixel, the reference pixel may not be interpolated. The DC filter can generate a prediction block through filtering when the prediction mode of the current block is a DC mode.

[0217] The reconstructed block or snapshot may be provided to the filter module 240. The filter module 240 may include the deblocking filter, the offset correction module, and the ALF.

[0219] Information on whether or not the deblocking filter is applied to the corresponding block or snapshot, and information about which of a strong filter and a weak filter is applied when the deblocking filter is applied, can be received from the device for encoding a video. The deblocking filter of the device for decoding a video can receive information about the deblocking filter from the device for encoding a video, and can perform deblocking filtering on the corresponding block.

[0221] The offset correction module can perform offset correction on the reconstructed snapshot based on the type of offset correction and offset value information applied to a snapshot when performing encoding.

[0223] The ALF can be applied to the coding unit based on information on whether to apply the ALF, ALF coefficient information, etc., received from the device for encoding a video. The ALF information can be provided as being included in a particular parameter set.

[0225] The memory 245 can store the reconstructed snapshot or block for use as a reference snapshot or block, and can provide the reconstructed snapshot to an output module.

[0227] As described above, in the embodiments of the present invention, for convenience of explanation, the coding unit is used as a term to represent a unit for encoding, but the coding unit may serve as a unit for performing decoding as well as encoding.

[0229] Furthermore, a current block may represent a target block to be encoded/decoded. And, the current block may represent a coding tree block (or a coding tree unit), a coding block (or a coding unit), a transform block (or a transform unit), a prediction block (or a prediction unit) or the like, depending on the encoding/decoding step.

[0234] A snapshot can be encoded/decoded by being divided into base blocks having a square shape or a non-square shape. At this time, the base block can be referred to as a coding tree unit. The coding tree unit can be defined as a coding unit of the largest size allowed within a sequence or a segment. Information regarding whether the coding tree unit has a square or non-square shape, or information regarding the size of the coding tree unit, can be signaled through a sequence parameter set, a snapshot parameter set or a segment header. The coding tree unit can be divided into partitions of smaller size. At this time, if the depth of a partition generated by dividing the coding tree unit is assumed to be 1, the depth of a partition generated by dividing the partition having the depth of 1 can be defined as 2. That is, a partition generated by dividing a partition having a depth of k in the coding tree unit can be defined as having a depth of k + 1.

[0236] A partition of arbitrary size generated by dividing a coding tree unit can be defined as a coding unit. The coding unit can be recursively divided or divided into base units for performing prediction, quantization, transform or loop filtering and the like.
For example, a partition of arbitrary size generated by dividing the coding unit can be defined as a coding unit, or it can be defined as a transform unit or a prediction unit, which is a base unit for performing prediction, quantization, transform, loop filtering and the like.

[0238] The partitioning of a coding tree unit or a coding unit can be performed based on at least one of a vertical line and a horizontal line. Furthermore, the number of vertical lines or horizontal lines dividing the coding tree unit or the coding unit may be at least one or more. For example, the coding tree unit or the coding unit can be divided into two partitions using one vertical line or one horizontal line, or the coding tree unit or the coding unit can be divided into three partitions using two vertical lines or two horizontal lines. Alternatively, the coding tree unit or the coding unit can be divided into four partitions having a length and width of 1/2 by using one vertical line and one horizontal line.

[0243] When a coding tree unit or a coding unit is divided into a plurality of partitions using at least one vertical line or at least one horizontal line, the partitions can have a uniform size or different sizes. Alternatively, any one partition can have a size different from the remaining partitions.

[0245] In the embodiments described below, it is assumed that a coding tree unit or a coding unit is divided into a quad tree structure or a binary tree structure. However, it is also possible to divide a coding tree unit or a coding unit using a greater number of vertical lines or a greater number of horizontal lines.

[0246] Figure 3 is a diagram illustrating an example of hierarchical partitioning of a coding block based on a tree structure according to an embodiment of the present invention.

[0248] An input video signal is decoded in predetermined block units. One such predetermined unit for decoding the input video signal is a coding block. The coding block can be a unit that performs intra/inter prediction, transform and quantization. In addition, a prediction mode (e.g., an intra-prediction mode or an inter-prediction mode) is determined in units of a coding block, and the prediction blocks included in the coding block may share the determined prediction mode. The coding block can be a square or non-square block having an arbitrary size in the range of 8x8 to 64x64, or it can be a square or non-square block having a size of 128x128, 256x256 or larger.

[0250] Specifically, the coding block can be hierarchically divided based on at least one of a quad tree and a binary tree. At this point, quad tree based partitioning can mean dividing a 2Nx2N coding block into four NxN coding blocks, and binary tree based partitioning can mean dividing one coding block into two coding blocks. Even if binary tree based partitioning is performed, there may be a square-shaped coding block at the lower depth.

[0252] Binary tree based partitioning can be performed symmetrically or asymmetrically. The coding block divided based on the binary tree can be a square block or a non-square block, such as a rectangular shape. For example, a partition type in which binary tree based partitioning is allowed may comprise at least one of a symmetric type of 2NxN (horizontal direction non-square coding unit) or Nx2N (vertical direction non-square coding unit), or an asymmetric type of nLx2N, nRx2N, 2NxnU or 2NxnD.

[0254] Binary tree based partitioning may be limitedly allowed to be one of a symmetric type or an asymmetric type partition.
In this case, constructing the coding tree unit with square blocks may correspond to quad tree CU partitioning, and constructing the coding tree unit with symmetric non-square blocks may correspond to binary tree partitioning. Constructing the coding tree unit with square blocks and symmetric non-square blocks may correspond to quad and binary tree CU partitioning.

[0256] Binary tree based partitioning can be performed on a coding block for which quad tree based partitioning is no longer performed. Quad tree based partitioning can no longer be performed on a coding block divided based on the binary tree.

[0258] Additionally, the partitioning of a lower depth can be determined depending on the partition type of a higher depth. For example, if binary tree based partitioning is allowed at two or more depths, only the same type as the binary tree partition of the upper depth may be allowed at the lower depth. For example, if binary tree based partitioning is performed at the upper depth with the 2NxN type, then binary tree based partitioning at the lower depth is also performed with the 2NxN type. Alternatively, if binary tree based partitioning is performed at the upper depth with the Nx2N type, then binary tree based partitioning at the lower depth is also performed with the Nx2N type.

[0260] On the contrary, it is also possible to allow, at a lower depth, only a type different from the binary tree partition type of the higher depth.

[0261] It may be possible to limit only a specific type of binary tree based partition to be used for the sequence, segment, coding tree unit or coding unit. As an example, only the 2NxN type or the Nx2N type of binary tree based partition may be allowed for the coding tree unit. An available partition type can be predefined in an encoder or a decoder, or information about the available partition type can be encoded and then signaled through a bit stream.

[0263] Figure 5 is a diagram illustrating an example where only a specific type of binary tree based partition is allowed. Figure 5A shows an example where only the Nx2N type of binary tree based partition is allowed, and Figure 5B shows an example where only the 2NxN type of binary tree based partition is allowed. To implement adaptive partitioning based on the quad tree or the binary tree, the following information can be used: information indicating quad tree based partitioning, information about the size/depth of the coding block for which quad tree based partitioning is allowed, information indicating binary tree based partitioning, information about the size/depth of the coding block for which binary tree based partitioning is allowed, information about the size/depth of the coding block for which binary tree based partitioning is not allowed, information about whether binary tree based partitioning is performed in a vertical direction or a horizontal direction, etc.

[0265] In addition, information can be obtained on the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which binary tree partitioning is allowed, for a coding tree unit or a specific coding unit. The information can be encoded in units of a coding tree unit or a coding unit, and can be transmitted to a decoder through a bit stream.

[0267] For example, a syntax element 'max_binary_depth_idx_minus1' indicating a maximum depth at which binary tree partitioning is allowed can be encoded/decoded through a bit stream. In this case, max_binary_depth_idx_minus1 + 1 can indicate the maximum depth at which binary tree partitioning is allowed.
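A small sketch of how a decoder might use the 'max_binary_depth_idx_minus1' syntax element together with a limit on the number of binary tree partitions to decide whether one more binary split is allowed. The helper name and the exact level at which the values are signalled (sequence, snapshot or segment) are assumptions made only for illustration.

```python
def binary_split_allowed(current_bt_depth: int,
                         max_binary_depth_idx_minus1: int,
                         bt_splits_so_far: int,
                         max_bt_splits: int) -> bool:
    """Return True if a further binary tree split may be applied.

    max_binary_depth_idx_minus1 + 1 is the maximum depth at which binary tree
    partitioning is allowed; max_bt_splits caps how many times binary tree
    partitioning may be used within the coding tree unit.
    """
    max_bt_depth = max_binary_depth_idx_minus1 + 1
    return current_bt_depth < max_bt_depth and bt_splits_so_far < max_bt_splits

# Example similar to the Figure 6 scenario discussed below: binary splits already
# occurred at depths 2 and 3, and depth 3 is the signalled maximum, so no further
# binary split is allowed.
print(binary_split_allowed(current_bt_depth=3, max_binary_depth_idx_minus1=2,
                           bt_splits_so_far=2, max_bt_splits=2))   # False
```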
[0270] Referring to the example shown in Figure 6, in Figure 6, the binary tree partitioning has been made for a coding unit having a depth of 2 and a coding unit having a depth of 3. Consequently, when minus one of information indicating the number of times the binary tree partition has been performed in the encoding tree unit (that is, 2 times), information indicating the maximum depth to which the tree partitioning has been allowed binary in the coding tree unit (that is, depth 3) or the number of depths at which the binary tree partition has been performed in the coding tree unit (that is, 2 (depth 2 and depth 3) ) can be encoded / decoded through a bit stream. [0272] As another example, at least one piece of information can be obtained about the number of times binary tree partitioning is allowed, the depth at which binary tree partitioning is allowed, or the number of depths at which partitioning is allowed. binary tree for each sequence or each segment. For example, information can be encoded in units of a sequence, a snapshot, or a segment unit and transmitted through a stream of bits. Therefore, at least one of the number of the binary tree partition in a first segment, the maximum depth at which the binary tree partition is allowed in the first segment, or the number of depths at which the tree partition is performed binary in the first segment can be distinguished from a second segment. For example, in the first segment, binary tree partitioning can be allowed for only one depth, while in the second segment, binary tree partitioning can be allowed for two depths. [0274] As another example, the number of times the binary tree partition is allowed, the depth at which the binary tree partition is allowed, or the number of depths at which the binary tree partition is allowed can be set differently from according to a time level identifier (TemporalID) of a segment or a snapshot. At this point, the temporal level identifier (TemporalID) is used to identify each of a plurality of video layers that have at least one view, spatial, temporal, or quality scalability. [0276] As shown in Figure 3, the first code block 300 with the partition depth (split depth) of k can be divided into multiple second code blocks based on the quadruple tree. For example, the second coding blocks 310 to 340 may be square blocks that they are half the width and half the height of the first code block, and the partition depth of the second code block can be increased to k + 1. [0278] The second coding block 310 with the partition depth of k + 1 can be divided into multiple third coding blocks with the partition depth of k + 2. The partitioning of the second coding block 310 can be performed using selectively one of the quad tree and the binary tree depending on a partitioning method. At this point, the partitioning method can be determined based on at least one of the information indicating quadruple tree based partitioning and the information indicating binary tree based partitioning. [0280] When dividing the second coding block 310 based on the quadruple tree, the second coding block 310 can be divided into four third coding blocks 310a that are half the width and half the height of the second coding block, and the depth The partition number of the third coding block 310a can be increased to k + 2. In contrast, when the second coding block 310 is divided based on the binary tree, the second coding block 310 can be divided into two third coding blocks. 
At this point, each of the two third coding blocks can be a non-square block that is half the width or half the height of the second coding block, and the partition depth can be increased to k + 2. The second coding block can be determined as a non-square block of a horizontal direction or a vertical direction depending on the partition direction, and the partition direction can be determined based on the information about whether binary tree based partitioning is performed in a vertical direction or a horizontal direction.

[0282] Meanwhile, the second coding block 310 can be determined as a leaf coding block that is no longer divided based on the quad tree or the binary tree. In this case, the leaf coding block can be used as a prediction block or a transform block.

[0284] Like the partitioning of the second coding block 310, the third coding block 310a can be determined as a leaf coding block, or it can be further divided based on the quad tree or the binary tree.

[0287] Meanwhile, the third coding block 310b divided based on the binary tree may be further divided into coding blocks 310b-2 of a vertical direction or coding blocks 310b-3 of a horizontal direction based on the binary tree, and the partition depth of the relevant coding blocks can be increased to k + 3. Alternatively, the third coding block 310b can be determined as a leaf coding block 310b-1 that is no longer divided based on the binary tree. In this case, the coding block 310b-1 can be used as a prediction block or a transform block. However, the above partitioning process can be performed in a limited manner based on at least one of: the information about the size/depth of the coding block for which quad tree based partitioning is allowed, the information about the size/depth of the coding block for which binary tree based partitioning is allowed, and the information about the size/depth of the coding block for which binary tree based partitioning is not allowed.

[0289] The number of candidates representing the size of a coding block can be limited to a predetermined number, or the size of a coding block in a predetermined unit can have a fixed value. As an example, the size of the coding block in a sequence or a snapshot can be limited to 256x256, 128x128 or 32x32. Information indicating the size of the coding block in the sequence or snapshot can be signaled through a sequence header or a snapshot header.

[0291] As a result of partitioning based on a quad tree and a binary tree, a coding unit can be represented as a square or rectangular shape of arbitrary size.

[0293] A coding block is encoded using at least one of a jump mode, intra-prediction, inter-prediction, or a jump method. Once a coding block is determined, a prediction block can be determined through predictive partitioning of the coding block. The predictive partitioning of the coding block can be performed by a partition mode (Part_mode) indicating a partition type of the coding block. The size or shape of the prediction block can be determined according to the partition mode of the coding block. For example, the size of a prediction block determined according to the partition mode can be equal to or less than the size of the coding block.

[0298] Figure 7 is a diagram illustrating a partition mode that can be applied to a coding block when the coding block is encoded by inter-prediction.

[0300] When a coding block is encoded by inter-prediction, one of 8 partition modes can be applied to the coding block, as in the example shown in Figure 4.
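As an illustration of how a partition mode (Part_mode) determines the geometry of the prediction blocks inside a coding block, the sketch below enumerates the eight partition types named in this description (2Nx2N, 2NxN, Nx2N, NxN and the asymmetric nLx2N, nRx2N, 2NxnU, 2NxnD). The 1/4 : 3/4 split ratio used for the asymmetric modes is the conventional one and is stated here only as an assumption; it is not specified in the text above.

```python
def prediction_partitions(part_mode: str, cb_size: int):
    """Return (x, y, width, height) tuples of the prediction blocks obtained by
    splitting a cb_size x cb_size coding block according to the partition mode."""
    n = cb_size // 2
    q = cb_size // 4          # asymmetric modes assumed to use a 1/4 : 3/4 split
    table = {
        "PART_2Nx2N": [(0, 0, cb_size, cb_size)],
        "PART_2NxN":  [(0, 0, cb_size, n), (0, n, cb_size, n)],
        "PART_Nx2N":  [(0, 0, n, cb_size), (n, 0, n, cb_size)],
        "PART_NxN":   [(0, 0, n, n), (n, 0, n, n), (0, n, n, n), (n, n, n, n)],
        "PART_2NxnU": [(0, 0, cb_size, q), (0, q, cb_size, cb_size - q)],
        "PART_2NxnD": [(0, 0, cb_size, cb_size - q), (0, cb_size - q, cb_size, q)],
        "PART_nLx2N": [(0, 0, q, cb_size), (q, 0, cb_size - q, cb_size)],
        "PART_nRx2N": [(0, 0, cb_size - q, cb_size), (cb_size - q, 0, q, cb_size)],
    }
    return table[part_mode]

print(prediction_partitions("PART_nLx2N", 32))   # [(0, 0, 8, 32), (8, 0, 24, 32)]
```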
[0302] When encoding a coding block by intra prediction, a PART_2Nx2N partition mode or a PART_NxN partition mode may be applied to the coding block. [0304] PART_NxN can be applied when an encoding block has a minimum size. At this point, the minimum encoding block size can be predefined in an encoder and in a decoder. Or, information regarding the minimum size of the coding block may be signaled by a bit stream. For example, the minimum size of the coding block can be signaled through a segment header, so that the minimum size of the coding block can be defined for each segment. [0306] In general, a prediction block can be 64x64 to 4x4 in size. However, when a coding block is encoded by interprediction, the prediction block may be restricted from being 4x4 in size to reduce memory bandwidth when performing motion compensation. [0308] Figure 8 is a flow chart illustrating processes for obtaining a residual sample in accordance with an embodiment to which the present invention is applied. [0310] First, a residual coefficient can be obtained from a current block S810. A decoder can obtain a residual coefficient through a coefficient scan method. For example, the decoder can perform a coefficient scan using a zigzag scan, a vertical scan, or a horizontal scan, and can obtain residual coefficients in a form of a two-dimensional block. [0311] An inverse quantization can be performed on the residual coefficient of the current block S820. [0313] An inverse transform is selectively performed according to whether to skip the inverse transform on the dequantized residual coefficient of the current block S830. Specifically, the decoder can determine whether the inverse transform is skipped in at least one of a horizontal direction or a vertical direction of the current block. When it is determined to apply the inverse transform in at least one of the horizontal direction or the vertical direction of the current block, a residual sample of the current block can be obtained by inversely transforming the dequantized residual coefficient of the current block. At this point, the inverse transform can be performed using at least one of DCT, DST, and KLT. [0315] When the inverse transform is skipped in both the horizontal direction and the vertical direction of the current block, no inverse transform is performed in the horizontal direction and the vertical direction of the current block. In this case, the residual sample of the current block can be obtained by scaling the dequantized residual coefficient with a predetermined value. [0317] Skipping the inverse transform in the horizontal direction means that the inverse transform is not performed in the horizontal direction but the inverse transform is performed in the vertical direction. At this time, scaling in the horizontal direction can be performed. [0319] Skipping the inverse transform in the vertical direction means that the inverse transform is not performed in the vertical direction but rather the inverse transform is performed in the horizontal direction. At this time, scaling in the vertical direction can be performed. [0321] Whether or not an inverse transform jump technique can be used for the current block can be determined depending on a partition type of the current block. For example, if the current block is generated through a binary tree-based partition, the inverse transform jump scheme can be restricted for the current block. 
Accordingly, when the current block is generated through the binary tree based partition, the residual sample of the current block can be obtained by inverse transforming the current block. Also, when the current block is generated via the binary tree based partition, the encoding/decoding of information indicating whether or not to skip the inverse transform (e.g., transform_skip_flag) can be omitted.

[0326] Alternatively, when the current block is generated via binary tree based partitioning, it is possible to limit the inverse transform jump scheme to at least one of the horizontal direction or the vertical direction. At this point, the direction in which the inverse transform jump scheme is limited can be determined based on information decoded from the bit stream, or it can be adaptively determined based on at least one of the size of the current block, the shape of the current block or the intra-prediction mode of the current block.

[0328] For example, when the current block is a non-square block that has a width greater than its height, the inverse transform jump scheme can be allowed only in the vertical direction and restricted in the horizontal direction. That is, when the current block is 2NxN, the inverse transform is performed in the horizontal direction of the current block, and the inverse transform in the vertical direction can be performed selectively.

[0330] On the other hand, when the current block is a non-square block having a height greater than its width, the inverse transform jump scheme can be allowed only in the horizontal direction and restricted in the vertical direction. That is, when the current block is Nx2N, the inverse transform is performed in the vertical direction of the current block, and the inverse transform in the horizontal direction can be performed selectively.

[0332] In contrast to the above example, when the current block is a non-square block that has a width greater than its height, the inverse transform jump scheme can be allowed only in the horizontal direction, and when the current block is a non-square block having a height greater than its width, the inverse transform jump scheme can be allowed only in the vertical direction.

[0334] Information indicating whether or not the inverse transform is skipped with respect to the horizontal direction, or information indicating whether the inverse transform is skipped with respect to the vertical direction, may be signaled through a bit stream. For example, the information indicating whether or not the inverse transform is skipped in the horizontal direction is a 1-bit flag, 'hor_transform_skip_flag', and the information indicating whether the inverse transform is skipped in the vertical direction is a 1-bit flag, 'ver_transform_skip_flag'. The encoder can encode at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag' according to the shape of the current block. Furthermore, the decoder can determine whether or not the inverse transform is skipped in the horizontal direction or in the vertical direction using at least one of 'hor_transform_skip_flag' or 'ver_transform_skip_flag'.

[0339] It may be set to skip the inverse transform for either direction of the current block depending on the partition type of the current block. For example, if the current block is generated through a binary tree based partition, the inverse transform in the horizontal direction or the vertical direction can be skipped.
That is, if the current block is generated through binary tree-based partitioning, it can be determined that the inverse transform for the current block is skipped in at least one of the horizontal direction or the vertical direction, without encoding / decoding information (for example, transform_skip_flag, hor_transform_skip_flag, ver_transform_skip_flag) indicating whether or not the inverse transform of the current block is skipped.

[0341] Figure 9 is a flow chart illustrating an inter prediction method according to an embodiment to which the present invention is applied.

[0343] Referring to Figure 9, motion information of a current block is determined S910. The motion information of the current block may include at least one of a motion vector related to the current block, a reference snapshot index of the current block, or an inter prediction direction of the current block.

[0345] The motion information of the current block can be obtained based on at least one of information signaled through a bit stream or motion information of a neighboring block adjacent to the current block.

[0347] Figure 10 is a diagram illustrating processes of obtaining motion information of a current block when a merge mode is applied to the current block.

[0349] If the merge mode is applied to the current block, a spatial merge candidate can be derived from a spatial neighboring block of the current block S1010. The spatial neighboring block may comprise at least one of blocks adjacent to the left, the top, or a corner (for example, at least one of an upper-left corner, an upper-right corner, or a lower-left corner) of the current block.

[0351] The motion information of the spatial merge candidate can be set to be the same as the motion information of the spatial neighboring block.

[0353] A temporal merge candidate can be derived from a temporal neighboring block of the current block S1020. The temporal neighboring block may mean a block included in a co-located snapshot. The co-located snapshot has a snapshot order count (POC) different from that of the current snapshot including the current block. The co-located snapshot can be determined as a snapshot having a predefined index in a reference snapshot list, or it can be determined by an index signaled from a bit stream. The temporal neighboring block can be determined to be a block comprising coordinates in a co-located block having the same position as the current block in the co-located snapshot, or a block adjacent to the co-located block. For example, at least one of a block including the central coordinates of the co-located block or a block adjacent to the lower-left boundary of the co-located block can be determined as the temporal neighboring block.

[0355] Motion information of the temporal merge candidate can be determined based on motion information of the temporal neighboring block. For example, a motion vector of the temporal merge candidate can be determined based on a motion vector of the temporal neighboring block. Furthermore, an inter prediction direction of the temporal merge candidate can be set to be the same as an inter prediction direction of the temporal neighboring block. However, a reference snapshot index of the temporal merge candidate can have a fixed value. For example, the reference snapshot index of the temporal merge candidate can be set to '0'.

[0357] Subsequently, a merge candidate list including the spatial merge candidate and the temporal merge candidate can be generated S1030.
If the number of merge candidates included in the merge candidate list is less than a maximum number of merge candidates, a combined merge candidate combining two or more merge candidates can be included in the merge candidate list.

[0360] When the merge candidate list is generated, at least one of the merge candidates included in the merge candidate list may be specified based on a merge candidate index S1040.

[0362] Motion information of the current block can be set to be the same as motion information of the merge candidate specified by the merge candidate index S1050. For example, when the spatial merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the spatial neighboring block. Alternatively, when the temporal merge candidate is selected by the merge candidate index, the motion information of the current block can be set to be the same as the motion information of the temporal neighboring block.

[0364] Figure 11 is a diagram illustrating processes of obtaining motion information of a current block when an AMVP mode is applied to the current block.

[0366] When the AMVP mode is applied to the current block, at least one of an inter prediction direction of the current block or a reference snapshot index can be decoded from a bit stream S1110. That is, when the AMVP mode is applied, at least one of the inter prediction direction or the reference snapshot index of the current block can be determined based on information encoded through the bit stream.

[0368] A spatial motion vector candidate can be determined based on a motion vector of a spatial neighboring block of the current block S1120. The spatial motion vector candidate may include at least one of a first spatial motion vector candidate derived from an upper neighboring block of the current block and a second spatial motion vector candidate derived from a left neighboring block of the current block. In this document, the upper neighboring block may include at least one of blocks adjacent to the top or an upper-right corner of the current block, and the left neighboring block of the current block may include at least one of blocks adjacent to the left or a lower-left corner of the current block. A block adjacent to an upper-left corner of the current block can be treated as either the upper neighboring block or the left neighboring block.

[0371] When reference snapshots of the current block and the spatial neighboring block are different from each other, it is also possible to obtain the spatial motion vector candidate by scaling the motion vector of the spatial neighboring block.

[0373] A temporal motion vector candidate can be determined based on a motion vector of a temporal neighboring block of the current block S1130. When reference snapshots of the current block and the temporal neighboring block are different from each other, it is also possible to obtain the temporal motion vector candidate by scaling the motion vector of the temporal neighboring block.

[0375] A motion vector candidate list including the spatial motion vector candidate and the temporal motion vector candidate may be generated S1140.

[0377] When the motion vector candidate list is generated, at least one of the motion vector candidates included in the motion vector candidate list may be specified based on information specifying at least one of the motion vector candidates S1150.
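As a purely illustrative aid to the candidate-list handling described for the merge mode (Figure 10) and the AMVP mode (Figure 11), the following Python sketch builds a small candidate list from spatial and temporal neighbors and selects an entry by a signaled index. The data layout, the pruning, the candidate ordering and the maximum list size are assumptions for illustration, not the normative derivation process.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Candidate:
    mv: Tuple[int, int]      # motion vector (x, y)
    ref_idx: int             # reference snapshot index
    inter_dir: int           # 1: L0 only, 2: L1 only, 3: bidirectional

def build_candidate_list(spatial: List[Optional[Candidate]],
                         temporal: Optional[Candidate],
                         max_candidates: int) -> List[Candidate]:
    """Spatial candidates are appended first, then the temporal candidate
    (with its reference snapshot index fixed to 0, as in the merge case),
    with simple duplicate pruning and truncation to the maximum list size."""
    out: List[Candidate] = []
    for cand in spatial:                      # e.g. left, top, corner neighbors
        if cand is not None and cand not in out:
            out.append(cand)
    if temporal is not None and len(out) < max_candidates:
        t = Candidate(temporal.mv, 0, temporal.inter_dir)
        if t not in out:
            out.append(t)
    # Combined / zero candidates could be appended here if the list is short.
    return out[:max_candidates]

def select_candidate(candidates: List[Candidate], index: int) -> Candidate:
    """The signaled index picks the entry whose motion information is copied
    to the current block (merge) or used as the MV predictor (AMVP)."""
    return candidates[index]
```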
[0379] The motion vector candidate specified by the information can be set as a motion vector prediction value of the current block, and a motion vector of the current block can be obtained by adding a motion vector difference value to the motion vector prediction value S1160. At this time, the motion vector difference value can be parsed from the bit stream.

[0381] When the motion information of the current block is obtained, motion compensation can be performed for the current block based on the obtained motion information S920. More specifically, motion compensation for the current block can be performed based on the inter prediction direction, the reference snapshot index and the motion vector of the current block.

[0383] The inter prediction direction can indicate N directions. In this document, N is a natural number and can be 1, 2, or 3 or more. If the inter prediction direction indicates N directions, it means that inter prediction of the current block is performed based on N reference snapshots or N reference blocks. For example, when the inter prediction direction of the current block indicates a unidirection, the inter prediction of the current block can be performed based on one reference snapshot. On the other hand, when the inter prediction direction of the current block indicates a bidirection, the inter prediction of the current block can be performed using two reference snapshots or two reference blocks.

[0385] It is also possible to determine whether multi-directional prediction is allowed for the current block based on at least one of a size or a shape of the current block. For example, when a coding unit has a square shape, multi-directional prediction is allowed for encoding / decoding it. On the other hand, when the coding unit has a non-square shape, only one-directional prediction is allowed for encoding / decoding it. Contrary to the previous cases, it is also possible to establish that multi-directional prediction is allowed for encoding / decoding the coding unit when it has the non-square shape, and only one-directional prediction is allowed for encoding / decoding the coding unit when it has the square shape. Alternatively, it is also possible to establish that multi-directional prediction is not allowed for encoding / decoding a prediction unit when the prediction unit has a non-square shape of 4x8 or 8x4 or the like.

[0387] The reference snapshot index can specify a reference snapshot to be used for inter prediction of the current block. Specifically, the reference snapshot index can specify any one of the reference snapshots included in a reference snapshot list. For example, when the inter prediction direction of the current block is bidirectional, the reference snapshot (L0 reference snapshot) included in the L0 reference snapshot list is specified by an L0 reference snapshot index, and the reference snapshot (L1 reference snapshot) included in the L1 reference snapshot list is specified by an L1 reference snapshot index.

[0389] Alternatively, one reference snapshot can be included in two or more reference snapshot lists. Therefore, even if the reference snapshot index of the reference snapshot included in the L0 reference snapshot list and the reference snapshot index of the reference snapshot included in the L1 reference snapshot list are different, the temporal orders (snapshot order count, POC) of both reference snapshots can be the same.

[0392] The motion vector can be used to specify a position of a reference block, in the reference snapshot, that corresponds to a prediction block of the current block.
Inter prediction of the current block can be performed based on the reference block, specified by the motion vector, in the reference snapshot. For example, an integer pixel included in the reference block or a non-integer pixel generated by interpolating integer pixels can be generated as a prediction sample of the current block. It is also possible that reference blocks specified by different motion vectors are included in the same reference snapshot. For example, when the reference snapshot selected from the L0 reference snapshot list and the reference snapshot selected from the L1 reference snapshot list are the same, the reference block specified by an L0 motion vector and the reference block specified by an L1 motion vector can be included in the same reference snapshot.

[0394] As described above, when the inter prediction direction of the current block indicates two or more directions, the motion compensation for the current block can be performed based on two or more reference snapshots or two or more reference blocks.

[0396] For example, when the current block is encoded with bidirectional prediction, the prediction block of the current block can be obtained based on two reference blocks obtained from two reference snapshots. Also, when the current block is encoded with bidirectional prediction, a residual block indicating the difference between an original block and the prediction block obtained based on the two reference blocks can be encoded / decoded.

[0398] When two or more reference snapshots are used, motion compensation for the current block can be performed by applying the same or different weights to the respective reference snapshots. Hereinafter, a method of performing weighted prediction on the current block when the inter prediction direction indicates two or more directions will be described in detail through the following embodiments. For the sake of explanation, the inter prediction direction of the current block is assumed to be bidirectional. However, even when the inter prediction direction of the current block indicates three or more directions, the following embodiments can be applied in a similar manner. Also, motion compensation for the current block using two prediction images will be referred to as a bidirectional prediction method or a bidirectional prediction encoding / decoding method.

[0400] When bidirectional prediction is applied to the current block, reference snapshots used for the bidirectional prediction of the current block can include a snapshot whose temporal order (snapshot order count, POC) is earlier than the current snapshot, a snapshot whose temporal order is later than the current snapshot, or the current snapshot itself. For example, one of the two reference snapshots can be a snapshot whose temporal order is earlier than the current snapshot, and the other snapshot can be a snapshot whose temporal order is later than the current snapshot. Alternatively, one of the two reference snapshots can be the current snapshot, and the other snapshot can be a snapshot whose temporal order is earlier than the current snapshot or whose temporal order is later than the current snapshot. Alternatively, both reference snapshots can have temporal orders earlier than the current snapshot, or both can have temporal orders later than the current snapshot. Alternatively, both reference snapshots can be the current snapshot.

[0402] Two prediction blocks can be generated from each of the two reference snapshot lists.
For example, a prediction block may be generated based on the L0 reference snapshot using the L0 motion vector, and a prediction block may be generated based on the L1 reference snapshot using the L1 motion vector. It is also possible that the prediction block generated by the L0 motion vector and the prediction block generated by the L1 motion vector are generated based on the same reference snapshot.

[0404] A prediction block of the current block can be obtained based on an average value of the prediction blocks generated based on both reference snapshots. For example, Equation 1 shows an example of obtaining the prediction block of the current block based on the average value of a plurality of the prediction blocks.

[0406] [Equation 1]

P(x) = (P0(x) + P1(x)) / 2

[0407] In Equation 1, P(x) indicates a final prediction sample of the current block or a bidirectionally predicted prediction sample, and PN(x) indicates a sample value of an LN prediction block generated based on an LN reference snapshot. For example, P0(x) can mean a prediction sample of the prediction block generated based on the L0 reference snapshot, and P1(x) can mean a prediction sample of the prediction block generated based on the L1 reference snapshot. That is, according to Equation 1, the final prediction block of the current block can be obtained based on the weighted sum of the plurality of prediction blocks generated based on the plurality of reference snapshots. At this time, a weight of a fixed value predefined in the encoder / decoder can be assigned to each prediction block.

[0409] According to one embodiment of the present invention, the final prediction block of the current block is obtained based on the weighted sum of a plurality of prediction blocks, and the weight assigned to each prediction block can be variably / adaptively determined. For example, when both reference snapshots or both prediction blocks have different brightness, it is more effective to perform bidirectional prediction for the current block by applying different weights to each of the prediction blocks than to perform bidirectional prediction for the current block by averaging the prediction blocks. Hereinafter, for the convenience of explanation, the bidirectional prediction method in which the weight assigned to each of the prediction blocks is variably / adaptively determined will be referred to as 'bidirectional weighted prediction'.

[0411] It is also possible to determine whether or not bidirectional weighted prediction is allowed for the current block based on at least one of a size or a shape of the current block. For example, if the coding unit has a square shape, it is allowed to encode / decode it using bidirectional weighted prediction, while if the coding unit has a non-square shape, it is not allowed to encode / decode it using bidirectional weighted prediction. Contrary to the previous cases, it is also possible to establish that it is allowed to encode / decode the coding block using bidirectional weighted prediction when it has the non-square shape, and it is not allowed to encode / decode the coding block using bidirectional weighted prediction when it has the square shape. As an alternative, it is also possible to establish that bidirectional weighted prediction is not allowed for encoding / decoding the prediction unit when the prediction unit is a non-square partition having a size of 4x8 or 8x4 or the like.
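To make the averaging of Equation 1 above concrete, the following Python sketch averages two prediction blocks sample by sample. The +1 rounding offset, the 8-bit sample type and the block size are illustrative assumptions, not the codec's normative integer arithmetic.

```python
import numpy as np

def bi_predict_average(pred_l0: np.ndarray, pred_l1: np.ndarray) -> np.ndarray:
    """Equation 1 with fixed weights: each final prediction sample is the
    rounded average of the corresponding L0 and L1 prediction samples."""
    s = pred_l0.astype(np.int32) + pred_l1.astype(np.int32)
    return ((s + 1) >> 1).astype(pred_l0.dtype)

# Example: two 4x4 prediction blocks with different brightness.
p0 = np.full((4, 4), 100, dtype=np.uint8)
p1 = np.full((4, 4), 110, dtype=np.uint8)
print(bi_predict_average(p0, p1))   # every sample becomes 105
```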
[0416] Figure 12 is a flow chart of a bidirectional weighted prediction method in accordance with one embodiment of the present invention.

[0418] To perform the bidirectional weighted prediction, a weighted prediction parameter can be determined for the current block S1210. The weighted prediction parameter can be used to determine the weights to be applied to both reference snapshots. For example, as depicted in Figure 13, a weight of 1-w can be applied to a prediction block generated based on the L0 reference snapshot, and a weight of w can be applied to a prediction block generated based on the L1 reference snapshot. Based on the weighted prediction parameter, the weight to be applied to each prediction block is determined S1220, and a weighted sum operation of a plurality of the prediction blocks is performed based on the determined weights to generate a final prediction block of the current block S1230. For example, the final prediction block of the current block can be generated based on the following Equation 2.

[0420] [Equation 2]

P(x) = (1 - w) × P0(x) + w × P1(x)

[0425] In Equation 2, w represents the weighted prediction parameter.

[0427] As shown in Equation 2, the final prediction block P(x) of the current block can be obtained by assigning the weight of 1-w to the prediction block P0 and assigning the weight of w to the prediction block P1. It is also possible to assign the weight of w to the prediction block P0 and assign the weight of 1-w to the prediction block P1, contrary to what is shown in Equation 2.

[0429] The weighted prediction parameter can be determined based on a difference in brightness between the reference snapshots, or it can be determined based on a distance between the current snapshot and the reference snapshots (i.e., the POC difference). As an alternative, it is also possible to determine the weighted prediction parameter based on the size or the shape of the current block.

[0431] The weighted prediction parameter can be determined in units of a block (e.g., a coding tree unit, a coding unit, a prediction unit or a transform unit), or it can be determined in units of a segment or a snapshot.

[0433] At this time, the weighted prediction parameter can be determined based on predefined candidate weighted prediction parameters. As an example, the weighted prediction parameter can be determined to be one of predefined values such as -1/4, 1/4, 3/8, 1/2, 5/8, 3/4 or 5/4.

[0435] Alternatively, after determining a weighted prediction parameter set for the current block, it is also possible to determine the weighted prediction parameter from at least one of the candidate weighted prediction parameters included in the determined weighted prediction parameter set. The weighted prediction parameter set can be determined in units of a block (for example, a coding tree unit, a coding unit, a prediction unit or a transform unit), or it can be determined in units of a segment or a snapshot.

[0437] For example, if one of the weighted prediction parameter sets w0 and w1 is selected, at least one of the candidate weighted prediction parameters included in the selected weighted prediction parameter set can be determined as the weighted prediction parameter for the current block. For example, assume 'w0 = {-1/4, 1/4, 3/8, 1/2, 5/8, 3/4, 5/4}' and 'w1 = {-3/8, -1/4, 1/4, 3/8, 1/2, 5/8, 3/4}'.
When the weighted prediction parameter set w0 is selected, the weighted prediction parameter w of the current block can be determined as one of the candidate weighted prediction parameters -1/4, 1/4, 3/8, 1/2, 5/8, 3/4 and 5/4 included in w0.

[0439] The weighted prediction parameter set available for the current block can be determined according to a temporal order or a temporal direction of the reference snapshots used for bidirectional prediction. The temporal order can indicate an encoding / decoding order between snapshots, or it can indicate an output order (for example, POC) of the snapshots. In addition, the temporal direction can indicate whether the temporal order of the reference snapshot is before or after the current snapshot.

[0441] As an example, depending on whether the two reference snapshots used for bidirectional prediction have the same temporal order, the weighted prediction parameter set available for the current snapshot can be determined. For example, depending on whether the L0 reference snapshot and the L1 reference snapshot are the same snapshot (i.e., the temporal orders of the snapshots are the same) or whether the L0 reference snapshot and the L1 reference snapshot are different from each other (i.e., the temporal orders of the snapshots are different), the weighted prediction parameter set available for the current block can be variably determined.

[0443] Different weighted prediction parameter sets may mean that at least one of an absolute value, a sign or a number of the weighted prediction parameters included in each weighted prediction parameter set is different. For example, when the temporal directions of the L0 reference snapshot and the L1 reference snapshot are the same, the weighted prediction parameter set w0 = {-1/4, 1/4, 3/8, 1/2, 5/8, 3/4, 5/4} can be used, and when the temporal directions of the L0 reference snapshot and the L1 reference snapshot are different, the weighted prediction parameter set w1 = {-3/8, -1/4, 1/4, 3/8, 1/2, 5/8, 3/4} can be used.

[0445] As an example, depending on whether or not the temporal directions of the two reference snapshots used in the bidirectional prediction are the same, the weighted prediction parameter set available for the current snapshot can be determined. For example, the weighted prediction parameter set available for the current block can be determined differently between when the temporal directions of the two reference snapshots are the same and when the temporal directions of the two reference snapshots are different. Specifically, the weighted prediction parameter of the current block can be determined differently according to whether or not both the L0 reference snapshot and the L1 reference snapshot are earlier than the current snapshot, whether or not both the L0 reference snapshot and the L1 reference snapshot are later than the current snapshot, or whether or not the temporal directions of the L0 reference snapshot and the L1 reference snapshot are different.

[0448] The number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets can be set differently for each block, each segment or each snapshot. For example, the number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets may be signaled in units of a segment. Consequently, the number of available candidate weighted prediction parameters or the number of available weighted prediction parameter sets may be different for each segment.
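The parameter-set selection just described, together with Equation 2, can be sketched as follows. The candidate values simply mirror the w0/w1 examples in the text, and the floating-point arithmetic (with no clipping to the valid sample range) is an illustrative simplification of what an encoder / decoder would do in integer form.

```python
import numpy as np

# Example candidate sets from the text: w0 when the two reference snapshots
# have the same temporal direction, w1 when their temporal directions differ.
W0_SAME_DIRECTION = [-1/4, 1/4, 3/8, 1/2, 5/8, 3/4, 5/4]
W1_MIXED_DIRECTION = [-3/8, -1/4, 1/4, 3/8, 1/2, 5/8, 3/4]

def candidate_weights(same_temporal_direction: bool) -> list:
    """Pick the weighted prediction parameter set according to the temporal
    directions of the two reference snapshots."""
    return W0_SAME_DIRECTION if same_temporal_direction else W1_MIXED_DIRECTION

def bi_predict_weighted(pred_l0: np.ndarray, pred_l1: np.ndarray,
                        w: float) -> np.ndarray:
    """Equation 2: P(x) = (1 - w) * P0(x) + w * P1(x)."""
    out = (1.0 - w) * pred_l0.astype(np.float64) + w * pred_l1.astype(np.float64)
    return np.rint(out).astype(pred_l0.dtype)

# Example: index 3 of the 'same direction' set selects w = 1/2 (plain average).
w = candidate_weights(True)[3]
```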
[0450] The weighted prediction parameter can be derived from a neighboring block adjacent to the current block. In this document, the neighboring block adjacent to the current block may include at least one of a spatial neighboring block or a temporal neighboring block of the current block.

[0452] As an example, the weighted prediction parameter of the current block may be set to a minimum value or a maximum value among the weighted prediction parameters of the neighboring blocks adjacent to the current block, or it may be set to an average value of the weighted prediction parameters of the neighboring blocks.

[0454] As an example, the weighted prediction parameter of the current block can be derived from a neighboring block located at a predetermined position among the neighboring blocks adjacent to the current block. In this document, the predetermined position can be determined variably or fixedly. Specifically, the position of the neighboring block can be determined by a size of the current block (for example, a coding unit, a prediction unit or a transform unit), a position of the current block in the coding tree unit, a shape of the current block (for example, a partition type of the current block) or a partition index of the current block. Alternatively, the position of the neighboring block can be predefined in the encoder / decoder and fixedly determined.

[0456] As an example, the weighted prediction parameter of the current block can be derived from a neighboring block to which bidirectional weighted prediction is applied, among the neighboring blocks adjacent to the current block. Specifically, the weighted prediction parameter of the current block can be derived from the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied when the neighboring blocks adjacent to the current block are scanned in a predetermined order. Figure 14 is a diagram illustrating a scan order between neighboring blocks. In Figure 14, scanning is performed in the order of a left neighboring block, an upper neighboring block, an upper-right neighboring block, a lower-left neighboring block and an upper-left neighboring block, but the present invention is not limited to the illustrated example. When scanning is performed following the predefined order, the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied can be used as the weighted prediction parameter of the current block.

[0458] Alternatively, when scanning in the predefined order, it is also possible to set the weighted prediction parameter of the first detected neighboring block to which bidirectional weighted prediction is applied as a prediction value of the weighted prediction parameter of the current block. In this case, the weighted prediction parameter of the current block can be obtained using the prediction value of the weighted prediction parameter and a residual value of the weighted prediction parameter.

[0460] As an example, it is also possible to derive the weighted prediction parameter of the current block from a spatial or temporal neighboring block merged with the motion information of the current block, or from a spatial or temporal neighboring block used to derive the motion vector prediction value of the current block.

[0462] It is also possible to signal information for determining the weighted prediction parameter through a bit stream.
For example, the weighted prediction parameter of the current block may be determined based on at least one of information indicating a value of the weighted prediction parameter, index information specifying one of the candidate weighted prediction parameters, or set index information specifying one of the weighted prediction parameter sets.

[0464] In binarizing and encoding the weighted prediction parameter, the smallest binary codeword can be mapped to the weighted prediction parameter that is statistically most frequently used. For example, truncated unary binarization can be performed on the weighted prediction parameter as shown in Table 1 below. Table 1 is an example where cMax is 6.

[0466] [Table 1]

[0468]

[0471] The truncated unary binarization method shown in Table 1 is basically the same as a unary binarization method, except that the binarization is performed after receiving the maximum value (cMax) of the input in advance. Table 2 shows truncated unary binarization with a cMax of 13.

[0473] [Table 2]

[0475]

[0478] When binarizing the weighted prediction parameter, it is also possible to use different binary codewords depending on whether or not the temporal directions of the reference snapshots used for bidirectional prediction are the same. For example, Table 3 illustrates binary codewords according to whether or not the temporal directions of the L0 reference snapshot and the L1 reference snapshot are the same.

[0481] [Table 3]

[0483]

[0486] It is also possible to determine the weighted prediction parameter of the current block according to a temporal order difference between the current snapshot and the reference snapshot. Here, the temporal order difference may indicate an encoding / decoding order difference between snapshots or an output order difference between snapshots (for example, a POC difference value). For example, the weighted prediction parameter of the current block can be determined based on at least one of the POC value difference between the current snapshot and the L0 reference snapshot (hereinafter referred to as a first reference distance) and the POC value difference between the current snapshot and the L1 reference snapshot (hereinafter referred to as a second reference distance).

[0488] Specifically, the weighted prediction parameter of the current block can be determined based on a relationship between the first reference distance and the second reference distance. When the first reference distance is w and the second reference distance is h, w / (w + h) can be used as the weighted prediction parameter of the current block. For example, when the first reference distance and the second reference distance are the same, the weighted prediction parameter of the current block can be determined as 1/2. Also, when the first reference distance is 1 and the second reference distance is 3, the weighted prediction parameter of the current block can be determined as 1/4.

[0489] Alternatively, when the first reference distance is w and the second reference distance is h, it is also possible to use, as the weighted prediction parameter of the current block, the candidate weighted prediction parameter having a value most similar to w / (w + h) among the candidate weighted prediction parameters.

[0491] As an alternative, it is also possible to binarize the weighted prediction parameter of the current block taking into account the first reference distance and the second reference distance.
Table 4 shows binary codewords based on the first reference distance and the second reference distance.

[0493] [Table 4]

[0495]

[0498] In the example shown in Table 4, when the first reference distance and the second reference distance are the same, the probability that the weighted prediction parameter will be set to 1/2 is high. As a result, the smallest codeword can be assigned to the weighted prediction parameter of 1/2 when the first reference distance and the second reference distance are the same.

[0503] When the first reference distance and the second reference distance are different, the smallest binary codeword can be mapped to the weighted prediction parameter that is statistically most frequently used. For example, when the first reference distance is greater than the second reference distance, the probability that a higher weight will be assigned to the L1 reference snapshot is high. Therefore, the smallest binary codeword can be mapped to a weighted prediction parameter greater than 1/2. On the other hand, when the first reference distance is smaller than the second reference distance, the probability that a higher weight will be assigned to the L0 reference snapshot is high. Therefore, the smallest binary codeword can be mapped to a weighted prediction parameter smaller than 1/2.

[0505] Contrary to the example shown in Table 4, it is also possible to map the smallest binary codeword to a weighted prediction parameter smaller than 1/2 when the first reference distance is greater than the second reference distance, and to map the smallest binary codeword to a weighted prediction parameter greater than 1/2 when the first reference distance is smaller than the second reference distance.

[0507] It is also possible to perform prediction on a current block by combining two or more prediction modes. The combined prediction mode can be a combination of an inter prediction mode and an intra prediction mode, or a combination of two or more inter prediction methods. In this document, the inter prediction methods may include at least one of a skip mode, a merge mode, an AMVP mode or a current snapshot reference mode. The current snapshot reference mode represents an inter prediction method using the current snapshot, which includes the current block, as a reference snapshot. When the current snapshot reference mode is used, a prediction block of the current block can be obtained from an area reconstructed before the current block. It is also possible to classify the current snapshot reference mode as one of the intra prediction modes instead of the inter prediction modes. Alternatively, the current snapshot reference mode can be understood to be an embodiment of a skip mode, a merge mode or an AMVP mode. Alternatively, it is also possible to construct the combined prediction mode with two or more intra prediction modes (e.g., one directional prediction mode and one non-directional prediction mode, or two or more directional prediction modes, etc.).

[0512] Hereinafter, a method of performing prediction on the current block by combining two or more prediction modes will be described in detail.

[0514] Figure 15 is a flow chart illustrating a combined prediction method in accordance with the present invention.

[0516] First, based on a first prediction mode, a first prediction block can be generated for a current block S1510. Then, based on a second prediction mode, a second prediction block can be generated for the current block S1520.
The first prediction mode and the second prediction mode can be different prediction modes. Either one of the first prediction block or the second prediction block can be generated by multi-directional prediction.

[0518] A weighted prediction parameter can be determined for the current block S1530. Since the embodiment for determining the weighted prediction parameter has been described in detail with reference to Figure 12, a detailed description thereof will be omitted in the present embodiment.

[0520] Weights to be applied to the first prediction block and the second prediction block are determined based on the weighted prediction parameter S1540. And, a final prediction block of the current block can be generated by performing a weighted sum operation of a plurality of prediction blocks based on the determined weights S1550.

[0522] Figures 16 and 17 are diagrams illustrating an example of generating a prediction block of a current block based on a weighted sum of a plurality of prediction blocks obtained by different prediction modes.

[0524] Referring to Figure 16, a prediction block P0 can be generated based on an L0 reference snapshot or an L1 reference snapshot (inter prediction), and a prediction block P1 can be generated based on neighboring samples encoded / decoded before the current block (intra prediction). In this case, a prediction block of the current block may be generated based on the weighted sum operation of the prediction block P0 and the prediction block P1.

[0526] Referring to Figure 17, a prediction block P0 can be generated based on an L0 reference snapshot or an L1 reference snapshot, and a prediction block P1 can be generated based on the current snapshot in the current snapshot reference mode. In this case, a prediction block of the current block may be generated based on the weighted sum operation of the prediction block P0 and the prediction block P1.

[0528] It can be determined, based on information signaled through a bit stream, whether or not to use the combined prediction method combining two or more prediction modes. For example, information indicating at least one of an intra prediction mode, an inter prediction mode or a combined prediction mode may be signaled through the bit stream. It is also possible to restrict the use of the combined prediction mode depending on a size or a shape of a block. For example, if the size of the current block is equal to or less than 8x8, or if the current block has a non-square shape, the use of the combined prediction mode can be restricted.

[0530] A prediction mode of the current block can be determined in units of a sub-block. For example, when the current block is divided into N partitions, the prediction mode can be determined individually for each of the N partitions. The partition type of the current block can be symmetric or asymmetric. Accordingly, the sub-blocks can have a square shape or a non-square shape. It can be determined whether to use a single prediction mode or the combined prediction mode in units of a sub-block.

[0532] At this time, the prediction mode for each sub-block can be determined by taking into account a distance from a reference sample. For example, when the current block is encoded with the intra prediction mode, the correlation with the reference sample becomes smaller as the distance from the reference sample increases.
Accordingly, the correlation between a sample in the current block that is far from the reference sample (e.g., a sample included in a right column or a lower row of the current block) and the reference sample can be considered to be small. Therefore, a sub-block adjacent to the reference sample (for example, a top reference sample or a left reference sample) can be encoded / decoded by intra prediction, and a sub-block far from the reference sample can be encoded / decoded by the combined prediction method combining intra prediction and inter prediction.

[0537] As in the example shown in Figure 18, a prediction block of sub-block 0 can be generated based on the intra prediction mode. On the other hand, a prediction block of sub-block 1 may be generated based on the weighted sum operation of a first prediction block generated based on the intra prediction mode and a second prediction block generated based on the inter prediction mode.

[0539] Even if a current block is similar to a reference block in a reference snapshot, if there is a change in brightness between a current snapshot and a previous snapshot, the intra prediction or inter prediction efficiency may be lowered. Consequently, it is possible to consider illumination compensation, which compensates a prediction sample generated through intra prediction or inter prediction, or a reconstruction sample reconstructed based on the prediction sample, for the change in brightness between the current snapshot and the reference snapshot. Illumination compensation can be performed by applying an illumination compensation weight and an offset to an image that is encoded / decoded in intra prediction or inter prediction. For example, illumination compensation prediction can be performed based on Equation 3 below.

[0541] [Equation 3]

[0543] p' = l × p + f

[0545] In Equation 3, p can indicate the prediction sample encoded / decoded by the intra prediction or the inter prediction, l indicates the illumination compensation weight, and f indicates the offset. p' can indicate a weighted prediction sample to which illumination compensation is applied.

[0547] It is also possible to apply illumination compensation to the reconstructed sample obtained based on the prediction sample encoded / decoded in the intra prediction or the inter prediction. Specifically, illumination compensation can be applied to the reconstructed sample before a loop filter is applied, or to the reconstructed sample after the loop filter is applied. In this case, in Equation 3, p can indicate the reconstructed sample and p' can indicate a weighted reconstruction sample to which illumination compensation is applied.

[0552] An illumination change can occur across the entire area of a current snapshot or a current segment when compared to a previous snapshot or a previous segment. Accordingly, illumination compensation can be performed in units of a sequence, a snapshot or a segment.

[0554] Alternatively, the illumination change may occur only in a partial area within a segment or a sequence when compared to a previous segment or a previous sequence. Accordingly, illumination compensation can be performed in units of a predetermined area in a snapshot or a segment. That is, by determining whether or not to perform illumination compensation in units of a predetermined area, it is possible to perform illumination compensation only in the partial area, in which the illumination change occurs, in a snapshot or in a segment.
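A minimal sketch of applying Equation 3 above to a prediction (or reconstruction) block follows. The clipping to an assumed 8-bit sample range and the rounding are illustrative additions that the text does not spell out.

```python
import numpy as np

def illumination_compensate(block: np.ndarray, l: float, f: float,
                            bit_depth: int = 8) -> np.ndarray:
    """Equation 3: p' = l * p + f, applied sample-wise, then clipped to the
    valid sample range for the assumed bit depth."""
    out = l * block.astype(np.float64) + f
    out = np.clip(np.rint(out), 0, (1 << bit_depth) - 1)
    return out.astype(block.dtype)

# Example: brighten a prediction block by roughly 5% plus an offset of 2.
compensated = illumination_compensate(np.full((4, 4), 120, dtype=np.uint8), 1.05, 2.0)
```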
[0556] When illumination compensation is performed only for a predetermined area within a snapshot or a segment, information for determining the area where illumination compensation is performed can be encoded / decoded. For example, information indicating a position of the area where illumination compensation is performed, a size of the area where illumination compensation is performed, or a shape of the area where illumination compensation is performed can be encoded / decoded.

[0558] Alternatively, it is also possible to encode / decode information indicating whether or not illumination compensation is performed in units of a block. The information can be a 1-bit flag, but is not limited thereto. For example, it can be determined whether or not to perform illumination compensation in units of a coding tree unit, a coding unit, a prediction unit or a transform unit. Accordingly, the information indicating whether to perform illumination compensation can be determined in units of a coding tree unit, a coding unit, a prediction unit or a transform unit.

[0560] It is also possible to determine the area, in which illumination compensation is performed, within a snapshot or a segment, and then determine whether to perform illumination compensation for each of the blocks included in the area. For example, when the predetermined area includes a plurality of coding tree units, a plurality of coding units, a plurality of prediction units or a plurality of transform units, information indicating whether or not to perform illumination compensation may be signaled for each block included in the predetermined area. Accordingly, illumination compensation can be performed selectively for each of the blocks included in the area in which illumination compensation is to be performed.

[0562] Based on the above description, the illumination compensation prediction method according to the present invention will be described in detail.

[0564] Figure 19 is a flow chart of an illumination compensation prediction method in accordance with the present invention.

[0566] First, an illumination compensation parameter can be determined for a current block S1910. The illumination compensation parameter may include at least one of an illumination compensation weight or an offset.

[0568] The illumination compensation parameter can be signaled through a bit stream in units of a sequence, a snapshot, a segment or an encoding / decoding block. In this document, the unit of the encoding / decoding block may represent at least one of a coding tree unit, a coding unit, a prediction unit or a transform unit.

[0570] Alternatively, it is also possible to signal the illumination compensation parameter for each predetermined area where illumination compensation is performed. For example, the illumination compensation parameter can be signaled for a predetermined area that includes a plurality of blocks. The plurality of blocks included in the predetermined area can use the same illumination compensation parameter.

[0575] The illumination compensation parameter can be signaled independently of an encoding mode of the current block. Alternatively, it can be determined whether or not to signal the illumination compensation parameter according to the encoding mode of the current block. For example, the illumination compensation parameter can be signaled only when the encoding mode of the current block is a predefined mode. In this document, the encoding mode may indicate whether the current block is encoded in intra prediction (i.e., an intra prediction mode) or whether the current block is encoded in inter prediction (i.e., an inter prediction mode).
For example, the illumination compensation parameter can be signaled only when the current block is encoded with inter prediction. Alternatively, it is also possible that the encoding mode indicates one of a skip mode, a merge mode, an AMVP mode or a current snapshot reference mode, which are inter prediction methods of the current block.

[0577] As an example, when the current block is encoded with the skip mode or the current snapshot reference mode, the illumination compensation parameter may not be signaled. On the other hand, when the current block is encoded with the merge mode or the AMVP mode, the illumination compensation parameter can be signaled through the bit stream. If the illumination compensation parameter is not signaled, illumination compensation for the current block may not be performed. Alternatively, if the illumination compensation parameter is not signaled, illumination compensation for the current block can be performed using an illumination compensation parameter predefined in the encoder / decoder.

[0579] The illumination compensation parameter can be derived based on an illumination change between a first template area in the current snapshot and a second template area in the reference snapshot. The first template area can be adjacent to the current block, and the second template area can be adjacent to a reference block. In this document, the reference block is used to generate the prediction block of the current block and can be specified by a motion vector of the current block. Alternatively, the second template area can have a position co-located with the first template area in the reference snapshot. The position of the second template area can be variably determined according to the reference snapshot or the encoding mode of the current block.

[0584] When an unavailable sample is included in the second template area, a replacement value can be assigned to the unavailable sample using an available sample. For example, the available sample can be copied to the position of the unavailable sample, or an interpolated value calculated using a plurality of available samples can be assigned to the position of the unavailable sample. The available sample can be included in the second template area or it can be located outside the second template area. For example, the replacement value of the unavailable sample included in the second template area can be calculated based on an available sample included in the reference block. At least one of a filter coefficient, a shape or the number of filter taps of the filter used in the interpolation can be variably determined based on at least one of a size or a shape of the template area.

[0586] The illumination compensation parameter can be calculated based on difference values between samples included in the first template area and samples included in the second template area. For example, when a neighboring sample of the current block is assumed to be yi (i is 0 to N-1) and a neighboring sample of the reference block is assumed to be xi (i is 0 to N-1), the illumination compensation weight l and the offset f can be obtained by minimizing E(l, f) in Equation 4.

[0588] [Equation 4]

E(l, f) = Σi (yi - (l × xi + f))²

[0593] Equation 4 can be modified as the following Equation 5.

[0595] [Equation 5]

l × Σi xi² + f × Σi xi = Σi xi × yi ,  l × Σi xi + f × N = Σi yi

[0600] From Equation 5, Equation 6 can be obtained for the illumination compensation weight l, and Equation 7 for the offset f.

[0601] [Equation 6]

[0603] l = (N × Σi xi × yi - Σi xi × Σi yi) / (N × Σi xi² - (Σi xi)²)
[0608] [Equation 7]

f = (Σi yi - l × Σi xi) / N

[0613] If the illumination compensation parameter is determined, illumination compensation for the current block can be performed using the determined illumination compensation parameter S1920. Illumination compensation can be performed by applying the illumination compensation weight and the offset to a block (e.g., a prediction block or a reconstruction block) that is encoded / decoded in intra prediction or inter prediction.

[0615] When the inter prediction direction of the current block indicates a plurality of directions, illumination compensation can be performed on at least one of a plurality of prediction blocks, and multi-directional prediction can be performed on the current block based on the prediction block to which illumination compensation is applied. For example, if bidirectional weighted prediction is applied to the current block, illumination compensation can be performed on at least one of a first prediction block and a second prediction block, and then a final prediction block or a bidirectionally predicted block of the current block can be generated based on the weighted sum operation between the first prediction block and the second prediction block.

[0616] Figure 20 is a flow chart of a bidirectional weighted prediction method based on illumination compensation.

[0618] Referring to Figure 20, first, it can be determined whether or not illumination compensation is performed on a reference snapshot S2010. Whether or not illumination compensation is performed on the reference snapshot can be determined based on information signaled through a bit stream. The information can be a 1-bit flag, but is not limited thereto. For example, pred_ic_comp_flag can indicate whether or not illumination compensation is performed on the reference snapshot.

[0621] If it is determined that illumination compensation is to be performed on a reference block, the reference snapshot on which illumination compensation is to be performed can be determined S2020. Specifically, when it is determined that illumination compensation is performed on the reference block, it is possible to determine whether to perform illumination compensation on the L0 reference snapshot or to perform illumination compensation on the L1 reference snapshot. The determination can be made based on information signaled through the bit stream. The information can specify any one of the reference snapshots. Alternatively, the information may be a plurality of 1-bit flags indicating whether or not illumination compensation is performed on each reference snapshot. For example, at least one of pred_ic_comp_l0_enabled_flag, indicating whether illumination compensation is performed on the L0 reference snapshot, or pred_ic_comp_l1_enabled_flag, indicating whether illumination compensation is performed on the L1 reference snapshot, may be signaled through the bit stream.

[0623] If the reference snapshot on which illumination compensation is to be performed is determined, an illumination compensation parameter to be applied to that reference snapshot can be determined S2030. Since the determination of the illumination compensation parameter has been described in detail with reference to Figure 19, a detailed description thereof will be omitted in this embodiment.

[0625] Based on the determined illumination compensation parameter, illumination compensation can be performed on a prediction block generated based on the reference snapshot on which illumination compensation is to be performed S2040.
Next, the bidirectional weighted prediction for the current block can be performed using the illumination-compensated prediction block S2050.

[0627] Figure 21 is a diagram illustrating an embodiment of bidirectional weighted prediction using a prediction block to which illumination compensation is applied. In Figure 21, it is illustrated that illumination compensation is performed on a prediction block generated based on the L1 reference snapshot. Accordingly, the bidirectional weighted prediction for the current block can be performed based on the weighted sum of a prediction block P0 generated based on the L0 reference snapshot and the illumination-compensated prediction block (l × P1 + f) generated based on the L1 reference snapshot.

[0629] It is also possible to perform bidirectional weighted prediction for the current block based on the illumination compensation weight used for illumination compensation.

[0631] As an example, based on the illumination compensation weight, a weighted prediction parameter of the current block can be obtained to perform the bidirectional weighted prediction on the current block. At this time, the weighted prediction parameter w of the current block can be set to the same value as the illumination compensation weight l, or it can be set to (1 - l). For example, when illumination compensation based on the illumination compensation weight l is applied to the prediction block generated based on the L0 reference snapshot, the bidirectional weighted prediction for the current block can be calculated based on the following Equation 8.

[0633] [Equation 8]

[0635] P(x) = l × P0(x) + (1 - l) × P1(x)

[0637] As an example, it is also possible to perform the bidirectional weighted prediction of the current block by applying a weight determined by the weighted prediction parameter to one of the plurality of prediction blocks and applying the illumination compensation weight to the other. For example, the bidirectional weighted prediction for the current block can be calculated based on Equation 9 below.

[0639] [Equation 9]

[0644] Although the above-described embodiments have been described based on a series of steps or flow charts, they do not limit the time-series order of the invention, and the steps may be performed simultaneously or in a different order as required. In addition, each of the components (for example, units, modules, etc.) constituting the block diagram in the above-described embodiments may be implemented by a hardware device or a software device, or a plurality of components may be combined and implemented by a single hardware or software device. The above-described embodiments may be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium can include one of, or a combination of, program commands, data files, data structures and the like. Examples of computer-readable media include magnetic media such as hard disks, floppy disks and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, flash memory and the like. The hardware device can be configured to operate as one or more software modules to carry out the process according to the present invention, and vice versa.
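To make the template-based fitting of Equations 4 to 7 and the Figure 21 style combination concrete, the following sketch fits the illumination compensation weight and offset by plain least squares over the template samples and then forms an illumination-compensated weighted bi-prediction. The unregularized closed-form solution, the floating-point arithmetic and the absence of clipping are assumptions for illustration, not the exact integer procedure an encoder / decoder would use.

```python
import numpy as np

def fit_illumination_params(y: np.ndarray, x: np.ndarray) -> tuple:
    """Least-squares minimization of E(l, f) = sum_i (y_i - (l * x_i + f))^2
    (cf. Equations 4-7), where y_i are template samples neighboring the
    current block and x_i are the template samples neighboring the reference
    block."""
    xf = x.astype(np.float64).ravel()
    yf = y.astype(np.float64).ravel()
    n = xf.size
    sx, sy = xf.sum(), yf.sum()
    sxx, sxy = np.dot(xf, xf), np.dot(xf, yf)
    denom = n * sxx - sx * sx
    l = (n * sxy - sx * sy) / denom if denom != 0.0 else 1.0   # cf. Equation 6
    f = (sy - l * sx) / n                                       # cf. Equation 7
    return l, f

def ic_bidirectional_weighted(pred_l0: np.ndarray, pred_l1: np.ndarray,
                              w: float, l: float, f: float) -> np.ndarray:
    """Figure 21 style combination: illumination compensation (l * P1 + f) is
    applied to the L1 prediction block before the weighted sum with the L0
    prediction block using the weighted prediction parameter w."""
    comp_l1 = l * pred_l1.astype(np.float64) + f
    out = (1.0 - w) * pred_l0.astype(np.float64) + w * comp_l1
    return np.rint(out).astype(pred_l0.dtype)
```

Setting w equal to (1 - l), with compensation applied to the L0 block instead, would correspond to the Equation 8 special case described above.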
[0646] Industrial applicability

[0648] The present invention can be applied to electronic devices that are capable of encoding / decoding a video.
Claims:
Claims (9)

[1] 1. A method of decoding a video signal, the method comprising: obtaining a first motion vector of a current block, the first motion vector being obtained by adding a first motion vector difference value and a first motion vector prediction value; obtaining a second motion vector of the current block, the second motion vector being obtained by adding a second motion vector difference value and a second motion vector prediction value; generating a first prediction sample for the current block using the first motion vector and a first reference image of the current block; generating a second prediction sample for the current block using the second motion vector and a second reference image of the current block; determining a first weight and a second weight based on index information parsed from a bit stream; obtaining a third prediction sample of the current block by applying the first weight to the first prediction sample and the second weight to the second prediction sample; and obtaining a reconstruction sample by adding the third prediction sample and a residual sample, wherein the index information specifies one of a plurality of candidate weighted prediction parameters, and wherein a maximum bit length of the index information is determined based on temporal directions of the first reference image and the second reference image.

[2] 2. The method of claim 1, wherein the second weight is determined to be the same as the one of the candidate weighted prediction parameters specified by the index information, and the first weight is derived by subtracting the second weight from a constant value.

[3] 3. The method of claim 1, wherein the maximum bit length of the index information is determined based on whether the first reference image and the second reference image are before or after the current image.

[4] 4. The method of claim 1, wherein the maximum bit length of the index information is different between when both the first reference image and the second reference image are before or after the current image and when one of the first reference image and the second reference image is before the current image while the other is after the current image.

[5] 5.
A method of encoding a video signal, the method comprising:
obtaining a first motion vector of a current block, a first motion vector difference value specifying a difference between the first motion vector and a first motion vector prediction value being encoded in a bit stream;
obtaining a second motion vector of the current block, a second motion vector difference value specifying a difference between the second motion vector and a second motion vector prediction value being encoded in the bit stream;
generating a first prediction sample for the current block using the first motion vector and a first reference image of the current block;
generating a second prediction sample for the current block using the second motion vector and a second reference image of the current block;
determining a first weight and a second weight, wherein index information for determining the first weight and the second weight is encoded in the bit stream;
obtaining a third prediction sample of the current block by applying the first weight to the first prediction sample and the second weight to the second prediction sample; and
obtaining a residual sample by subtracting the third prediction sample from an original sample,
wherein the maximum bit length of the index information is determined based on temporal directions of the first reference image and the second reference image.

[6] 6. The method of claim 5, wherein the second weight is determined to be the same as one of the weighted prediction parameter candidates specified by the index information, and wherein the first weight is obtained by subtracting the second weight from a constant value.

[7] 7. The method of claim 5, wherein the maximum bit length of the index information is determined based on whether the first reference image and the second reference image are before or after the current image.

[8] 8. The method of claim 5, wherein the maximum bit length of the index information is different between the case where both the first reference image and the second reference image are before or after the current image and the case where one of the first reference image and the second reference image is before the current image while the other is after the current image.

[9] 9.
A non-transitory computer-readable medium for storing data associated with a video signal, comprising:
a data stream stored on the non-transitory computer-readable medium, the data stream being encoded by an encoding method comprising:
obtaining a first motion vector of a current block, a first motion vector difference value specifying a difference between the first motion vector and a first motion vector prediction value being encoded in a bit stream;
obtaining a second motion vector of the current block, a second motion vector difference value specifying a difference between the second motion vector and a second motion vector prediction value being encoded in the bit stream;
generating a first prediction sample for the current block using the first motion vector and a first reference image of the current block;
generating a second prediction sample for the current block using the second motion vector and a second reference image of the current block;
determining a first weight and a second weight, wherein index information for determining the first weight and the second weight is encoded in the bit stream;
obtaining a third prediction sample of the current block by applying the first weight to the first prediction sample and the second weight to the second prediction sample; and
obtaining a residual sample by subtracting the third prediction sample from an original sample,
wherein the maximum bit length of the index information is determined based on temporal directions of the first reference image and the second reference image.
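As a non-authoritative illustration of the decoding flow recited in claim 1 above, the following sketch reconstructs a single sample. All names (decode_sample, fetch_l0, weight_candidates, max_index_bits) and the concrete bit lengths are assumptions introduced for this example; the claims only state that the maximum bit length of the index information depends on the temporal directions of the two reference images, without fixing particular values.

```python
def decode_sample(mvd0, mvp0, mvd1, mvp1, fetch_l0, fetch_l1,
                  weight_candidates, index_info, residual):
    # Motion vectors: difference value + prediction value (first steps of claim 1).
    mv0 = (mvd0[0] + mvp0[0], mvd0[1] + mvp0[1])
    mv1 = (mvd1[0] + mvp1[0], mvd1[1] + mvp1[1])

    # First and second prediction samples, fetched from the first and second
    # reference images by caller-supplied functions (placeholders in this sketch).
    p_first = fetch_l0(mv0)
    p_second = fetch_l1(mv1)

    # The second weight is one of the signalled candidates; the first weight is
    # derived by subtracting it from a constant value (cf. claim 2), assumed 1 here.
    w_second = weight_candidates[index_info]
    w_first = 1.0 - w_second

    # Third prediction sample, then the reconstruction sample via the residual.
    p_third = w_first * p_first + w_second * p_second
    return p_third + residual


def max_index_bits(ref0_before_current, ref1_before_current,
                   bits_same_side=3, bits_opposite=2):
    # The claims only state that the maximum bit length differs between the case
    # where both reference images lie on the same temporal side of the current
    # image and the case where they lie on opposite sides; the values 3 and 2
    # used here are arbitrary assumptions for illustration.
    same_side = (ref0_before_current == ref1_before_current)
    return bits_same_side if same_side else bits_opposite


# Example usage with dummy fetchers returning fixed sample values:
rec = decode_sample((1, 0), (2, 3), (0, -1), (4, 2),
                    fetch_l0=lambda mv: 100.0, fetch_l1=lambda mv: 120.0,
                    weight_candidates=[0.25, 0.5, 0.75], index_info=2,
                    residual=-3.0)
print(rec)                          # 112.0
print(max_index_bits(True, False))  # 2 (opposite temporal directions)
```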
Patent family:

Publication number | Publication date
EP3484160A1 | 2019-05-15
ES2737874B2 | 2020-10-16
ES2737874R1 | 2020-05-08
CN109417641A | 2019-03-01
ES2699748A2 | 2019-02-12
ES2737874A2 | 2020-01-16
WO2018008906A1 | 2018-01-11
ES2699748B2 | 2021-05-13
ES2699748R1 | 2019-04-05
KR20180005121A | 2018-01-15
EP3484160A4 | 2019-12-25
US20190246133A1 | 2019-08-08
Legal status:

2021-01-21 | BA2A | Patent application published | Ref document number: 2802817 | Country of ref document: ES | Kind code of ref document: A2 | Effective date: 20210121
Priority:
Application number | Application date
KR20160085015 | 2016-07-05